The Margin Perceptron with Unlearning

Authors

  • Constantinos Panagiotakopoulos
  • Petroula Tsampouka
Abstract

We introduce into the classical Perceptron algorithm with margin a mechanism of unlearning which, in the course of the regular update, allows for a reduction of possible contributions from “very well classified” patterns to the weight vector. The resulting incremental classification algorithm, called Margin Perceptron with Unlearning (MPU), provably converges in a finite number of updates to any desired approximation, chosen before the algorithm is run, of either the maximal margin or the optimal 1-norm soft margin solution. Moreover, an experimental comparative evaluation involving representative linear Support Vector Machines reveals that the MPU algorithm is very competitive.
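Below is a minimal, illustrative sketch of the idea described in the abstract: a margin-driven perceptron that keeps track of how often each pattern has been added to the weight vector and, as part of the regular update, may subtract one contribution of a pattern that is currently classified with a very large normalized margin. The function name mpu_sketch, the parameters beta and gamma, and the specific unlearning criterion are assumptions made for illustration only; the paper's actual MPU update rule and convergence conditions differ in detail.

```python
import numpy as np

def mpu_sketch(X, y, beta=0.1, gamma=0.9, max_epochs=1000):
    """Illustrative sketch of a margin perceptron with an unlearning step.

    X: (n, d) patterns, y: (n,) labels in {-1, +1}.
    beta (target margin fraction) and gamma (threshold above which a pattern
    counts as "very well classified") are hypothetical parameters.
    """
    n, d = X.shape
    w = np.zeros(d)
    alpha = np.zeros(n)                  # contributions of each pattern to w
    norms = np.linalg.norm(X, axis=1)

    for _ in range(max_epochs):
        updated = False
        for i in range(n):
            wnorm = np.linalg.norm(w)
            # regular margin-perceptron update on an (approximate) margin error
            if y[i] * (w @ X[i]) <= beta * wnorm * norms[i]:
                w = w + y[i] * X[i]
                alpha[i] += 1
                updated = True
                # unlearning: if a pattern that contributed to w is now
                # classified with a very large normalized margin, remove one
                # of its contributions (hypothetical criterion)
                wnorm = np.linalg.norm(w)
                if wnorm > 0:
                    nm = y * (X @ w) / (wnorm * norms)
                    j = int(np.argmax(np.where(alpha > 0, nm, -np.inf)))
                    if alpha[j] > 0 and nm[j] > gamma:
                        w = w - y[j] * X[j]
                        alpha[j] -= 1
        if not updated:
            return w                     # no margin errors left
    return w
```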

Related articles

The Perceptron with Dynamic Margin

The classical perceptron rule provides a varying upper bound on the maximum margin, namely the length of the current weight vector divided by the total number of updates up to that time. Requiring that the perceptron update its internal state whenever the normalized margin of a pattern is found not to exceed a certain fraction of this dynamic upper bound, we construct a new approximate maximum ...
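A rough sketch of the rule this snippet describes: the running bound on the maximum margin is taken as ||w|| / t, with t the number of updates so far, and a pattern triggers an update when its normalized margin does not exceed a fraction eps of that bound. The value of eps, the exact normalization of the margin, and the stopping rule are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def dynamic_margin_perceptron(X, y, eps=0.5, max_epochs=1000):
    """Sketch of a perceptron with a dynamic margin condition (hypothetical details)."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0                                            # number of updates so far
    for _ in range(max_epochs):
        updated = False
        for i in range(n):
            wnorm = np.linalg.norm(w)
            bound = wnorm / t if t > 0 else np.inf   # dynamic upper bound on the margin
            margin = y[i] * (w @ X[i]) / wnorm if wnorm > 0 else 0.0
            if margin <= eps * bound:
                w = w + y[i] * X[i]                  # classical perceptron update
                t += 1
                updated = True
        if not updated:
            return w
    return w
```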

Preventing Unlearning During On-Line Training of Feedforward Networks

Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. These interference problems are especially prevalent in on-line applications where learning is directed by training data that is currently available rather than some optimal presentation schedule of the training data. We propose a procedure that enhances a learning algorithm by...

The Role of Weight Shrinking in Large Margin Perceptron Learning

We introduce into the classical perceptron algorithm with margin a mechanism that shrinks the current weight vector as a first step of the update. If the shrinking factor is constant the resulting algorithm may be regarded as a margin-error-driven version of NORMA with constant learning rate. In this case we show that the allowed strength of shrinking depends on the value of the maximum margin....
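A minimal sketch of the shrinking update this snippet describes, assuming a constant shrinking factor lam applied to the weight vector before the usual perceptron step on a margin error; lam, the margin-error threshold beta, and the stopping rule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def shrinking_margin_perceptron(X, y, lam=0.01, beta=0.1, max_epochs=1000):
    """Sketch of a margin perceptron whose update first shrinks w (hypothetical parameters)."""
    n, d = X.shape
    w = np.zeros(d)
    norms = np.linalg.norm(X, axis=1)
    for _ in range(max_epochs):
        updated = False
        for i in range(n):
            wnorm = np.linalg.norm(w)
            if y[i] * (w @ X[i]) <= beta * wnorm * norms[i]:   # margin error
                w = (1.0 - lam) * w          # shrinking as the first step of the update
                w = w + y[i] * X[i]          # then the usual perceptron step
                updated = True
        if not updated:
            return w
    return w
```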

Statistical Mechanics of On-line Ensemble Teacher Learning through a Novel Perceptron Learning Rule

In ensemble teacher learning, ensemble teachers have only uncertain information about the true teacher, and this information is given by an ensemble consisting of an infinite number of ensemble teachers whose variety is sufficiently rich. In this learning, a student learns from an ensemble teacher that is iteratively selected randomly from a pool of many ensemble teachers. An interesting point ...

Flexible Margin Selection for Reranking with Full Pairwise Samples

Perceptron-like large margin algorithms are introduced for the experiments with various margin selections. Compared to the previous perceptron reranking algorithms, the new algorithms use full pairwise samples and allow us to search for margins in a larger space. Our experimental results on the data set of (Collins, 2000) show that a perceptron-like ordinal regression algorithm with uneven marg...


Journal:

Volume   Issue

Pages  -

Publication date: 2010